Style-driven Shape Analysis and Synthesis
In this dissertation I will investigate algorithms that analyze stylistic properties of 3D shapes and automatically synthesize shapes given style specifications. I will start by introducing a structure-transcending method for style similarity evaluation between 3D shapes. Inspired by observations about style similarity in the art history literature, we propose an algorithmically computed style similarity measure which identifies style-related elements on the analyzed models and collates element-level geometric similarity measurements into an object-level style measure consistent with human perception. To achieve this consistency, we employ crowdsourcing to learn the relative perceptual importance of a range of elementary shape distances and other parameters used in our measurement from participant answers to cross-structure style similarity queries.

I will then describe an algorithm that utilizes this learned style similarity measure to synthesize 3D models of man-made shapes. The algorithm combines user-specified style, described via an exemplar shape, and functionality, encoded by a functionally different target shape. We transfer the exemplar style to the target via a sequence of compatible element-level operations, where the compatibility is a learned metric that estimates the impact of each operation on the edited shape. We use this metric to cast style transfer as a tabu search, which incrementally updates the target shape using compatible operations, progressively increasing its style similarity to the exemplar while strictly maintaining its functionality at each step.

Finally, I will propose a method for reconstructing 3D shapes following style aspects of given 2D drawings. Our method takes line drawings as input and converts them into surface depth and normal maps from several output viewpoints via a deep convolutional neural network with a multi-view encoder-decoder architecture. The multi-view maps are then consolidated into a dense, coherent 3D point cloud by solving an optimization problem that fuses depth and normal information across all output viewpoints. The output point cloud is then converted into a polygon mesh representation, which is further fine-tuned to match the input sketch more precisely.
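The style-transfer search described above can be sketched as a simplified, greedy loop with a tabu list. Everything here — the operation set `ops`, the learned `style_sim` measure, and the `is_functional` check — is a hypothetical stand-in for the learned components described in the text, not the dissertation's actual implementation:

```python
def tabu_style_transfer(target, exemplar, ops, style_sim, is_functional,
                        max_iters=100, tabu_size=10):
    """Simplified sketch of the tabu search: repeatedly apply the
    compatible edit that most increases style similarity to the
    exemplar, rejecting edits that break functionality and edits
    on the tabu list of recently applied operations."""
    shape = target
    tabu = []  # recently applied operations, temporarily forbidden
    for _ in range(max_iters):
        best_op, best_score = None, style_sim(shape, exemplar)
        for op in ops:
            if op in tabu:
                continue
            candidate = op(shape)
            if not is_functional(candidate):
                continue  # functionality is strictly maintained at each step
            score = style_sim(candidate, exemplar)
            if score > best_score:
                best_op, best_score = op, score
        if best_op is None:
            break  # no admissible improving move remains
        shape = best_op(shape)
        tabu.append(best_op)
        if len(tabu) > tabu_size:
            tabu.pop(0)  # oldest move becomes admissible again
    return shape
```

In this toy form the shape could be any editable object; the real method operates on element-level edits of 3D models and uses the learned compatibility metric to define the admissible operation set.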
3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks
We propose a method for reconstructing 3D shapes from 2D sketches in the form
of line drawings. Our method takes as input a single sketch, or multiple
sketches, and outputs a dense point cloud representing a 3D reconstruction of
the input sketch(es). The point cloud is then converted into a polygon mesh. At
the heart of our method lies a deep, encoder-decoder network. The encoder
converts the sketch into a compact representation encoding shape information.
The decoder converts this representation into depth and normal maps capturing
the underlying surface from several output viewpoints. The multi-view maps are
then consolidated into a 3D point cloud by solving an optimization problem that
fuses depth and normals across all viewpoints. In our experiments, compared to
other methods such as volumetric networks, our architecture offers several
advantages, including more faithful reconstruction, higher output surface
resolution, and better preservation of topology and shape structure.
Comment: 3DV 2017 (oral)
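The consolidation step rests on lifting each predicted depth map into 3D before fusing across viewpoints. A minimal sketch of that back-projection, assuming a pinhole camera model with hypothetical intrinsics `fx`, `fy`, `cx`, `cy` and a row-major list-of-lists depth map (not the paper's actual data layout):

```python
def backproject(depth, fx, fy, cx, cy):
    """Lift a per-view depth map into 3D points in that view's
    camera frame using a pinhole camera model: pixel (u, v) with
    depth d maps to ((u - cx) * d / fx, (v - cy) * d / fy, d)."""
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d <= 0:
                continue  # no surface predicted at this pixel
            x = (u - cx) * d / fx
            y = (v - cy) * d / fy
            points.append((x, y, d))
    return points
```

The full method additionally transforms each view's points into a common world frame and solves a joint optimization that reconciles the depths with the predicted normal maps; this sketch shows only the per-view lifting.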
Learning to Group Discrete Graphical Patterns
We introduce a deep learning approach for grouping discrete patterns common in graphical designs. Our approach is based on a convolutional neural network architecture that learns a grouping measure defined over a pair of pattern elements. Motivated by perceptual grouping principles, the key feature of our network is the encoding of element shape, context, symmetries, and structural arrangements. These element properties are all jointly considered and appropriately weighted in our grouping measure. To better align our measure with human perception of grouping, we train our network on a large, human-annotated dataset of pattern groupings consisting of patterns at varying granularity levels, with rich element relations and varieties, and tempered with noise and other data imperfections. Experimental results demonstrate that our deep-learned measure leads to robust grouping results.
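One natural way to turn such a pairwise grouping measure into actual groups is to merge every pair scoring above a threshold and take connected components. This is a hypothetical post-processing sketch (the paper does not specify this exact procedure); `pair_score` stands in for the learned network:

```python
def group_elements(n, pair_score, threshold=0.5):
    """Given a pairwise grouping measure pair_score(i, j) in [0, 1]
    over n pattern elements, merge any pair scoring above threshold
    and return the resulting groups as connected components,
    computed with a union-find structure."""
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if pair_score(i, j) > threshold:
                parent[find(i)] = find(j)  # union the two components

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```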
Mechanical manipulation for ordered topological defects
Randomly distributed topological defects created during spontaneous symmetry breaking are fingerprints for tracing the evolution of symmetry, the range of interactions, and the order parameters in condensed-matter systems. However, an effective means of manipulating topological defects into ordered form has remained elusive due to topological protection. Here, we establish a strategy to effectively align the topological domain networks in hexagonal manganites through a mechanical approach. We find that nanoindentation strain gives rise to a threefold Magnus-type force distribution, leading to a sixfold-symmetric domain pattern by driving vortices and antivortices in opposite directions. On the basis of this rationale, a sizeable mono-chirality topological stripe pattern is readily achieved by expanding the nanoindentation into a scratch, directly transforming the randomly distributed topological defects into an ordered form. This discovery provides a mechanical strategy for manipulating topologically protected domains not only in ferroelectrics but also in ferromagnets, antiferromagnets, and ferroelastics.